
    Geospatial Discovery Network (GeoDN): A Large-Scale Data Mining Perspective

    Lightning presentation of activities in the Earth Observation Data Science department at the German Aerospace Center (DLR) related to "Large-Scale Data Mining" in the context of the DLR terrabyte initiative, the application of self-supervised learning to cross-data-center analytics, and climate action geodata analytics, given to network with academic, corporate, and governmental organizations such as MIT, Oxford University, Stony Brook University, Columbia University, the New York Academy of Sciences, NASA, NOAA, ECCC, IBM Research, and Argonne National Laboratory.

    Large-Scale Geo-Data Mining for Good

    The ever-increasing amount of earth observation data provides an ample basis to sense, understand, and visualize the health of our planet. Machine learning enables us to value our home by mining the massive amounts of geo-information provided by satellite and airborne measurements, once curated for scalable access by a Big Geospatial Data "digital twin" platform. My presentation intends to bridge the "AI Ethics" and "Big Data & Global Human Behavior" sessions through a technical overview of remote sensing technologies, demonstrating their value for applications in archaeology, urban mapping, and biomass estimation relevant to various ethical aspects. I invite you to enter a vital, interdisciplinary discussion on:
    a. How can we leverage machine learning and remote sensing to improve the local climate in (mega)cities for the well-being of their urban populations, and how do we address the related ethical concerns?
    b. How can artificial intelligence and earth observation help protect the Amazon rainforest, guided by fair principles that incorporate the "perspectives of all stakeholders" such as endangered species, local farmers, archaeologists, and governments? What are the current limitations of these technologies vis-à-vis the protection of human rights and ethics, and how do we overcome them?
    c. How do we transparently implement AI-based environmental management inspired by the United Nations' Sustainable Development Goals?

    Quantification of Carbon Sequestration in Urban Forests

    Vegetation, trees in particular, sequester carbon by absorbing carbon dioxide from the atmosphere. However, the lack of efficient quantification methods of carbon stored in trees renders it difficult to track the process. We present an approach to estimate the carbon storage in trees based on fusing multi-spectral aerial imagery and LiDAR data to identify tree coverage, geometric shape, and tree species -- key attributes for carbon storage quantification. We demonstrate that tree species information and their three-dimensional geometric shapes can be estimated from aerial imagery in order to determine the tree's biomass. Specifically, we estimate a total of 52,000 tons of carbon sequestered in trees for New York City's borough of Manhattan.
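
    To make the biomass step concrete, here is a minimal sketch, assuming a generic allometric scaling law, of how per-tree carbon could be derived from the attributes the paper extracts (species from imagery; height and crown size from LiDAR). The coefficients, the diameter proxy, and the function names are illustrative placeholders, not the authors' model.

        # Hedged sketch: per-tree carbon from species + LiDAR geometry via a
        # generic allometric curve. All coefficients are placeholders.

        ALLOMETRIC = {              # hypothetical biomass = a * dbh**b (kg, dbh in cm)
            "oak":   (0.12, 2.4),
            "maple": (0.10, 2.5),
        }

        def dbh_from_lidar(height_m: float, crown_diameter_m: float) -> float:
            """Crude proxy for stem diameter at breast height (cm) from LiDAR geometry."""
            return 0.5 * height_m + 2.0 * crown_diameter_m    # placeholder regression

        def tree_carbon_kg(species: str, height_m: float, crown_diameter_m: float) -> float:
            a, b = ALLOMETRIC.get(species, (0.11, 2.45))      # generic fallback curve
            biomass_kg = a * dbh_from_lidar(height_m, crown_diameter_m) ** b
            return 0.5 * biomass_kg                           # ~50% of dry biomass is carbon

        # Summing over all detected trees yields a borough-scale estimate.
        trees = [("oak", 18.0, 7.5), ("maple", 12.0, 5.0)]
        print(sum(tree_carbon_kg(*t) for t in trees) / 1000.0, "t carbon")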

    Feature Guided Masked Autoencoder for Self-supervised Learning in Remote Sensing

    Self-supervised learning guided by masked image modelling, such as the Masked AutoEncoder (MAE), has attracted wide attention for pretraining vision transformers in remote sensing. However, MAE tends to excessively focus on pixel details, thereby limiting the model's capacity for semantic understanding, in particular for noisy SAR images. In this paper, we explore spectral and spatial remote sensing image features as improved MAE-reconstruction targets. We first conduct a study on reconstructing various image features, all performing comparably well or better than raw pixels. Based on these observations, we propose the Feature Guided Masked Autoencoder (FG-MAE): reconstructing a combination of Histograms of Oriented Gradients (HOG) and Normalized Difference Indices (NDI) for multispectral images, and reconstructing HOG for SAR images. Experimental results on three downstream tasks illustrate the effectiveness of FG-MAE, with a particular boost for SAR imagery. Furthermore, we demonstrate the well-inherited scalability of FG-MAE and release a first series of pretrained vision transformers for medium-resolution SAR and multispectral images.
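
    As a rough illustration of the reconstruction targets described above, the sketch below computes an NDI (here NDVI) and HOG features from an unmasked multispectral image; an MAE decoder would then regress these instead of raw pixels. The band indices and HOG parameters are assumptions for the sketch, not the released FG-MAE configuration.

        # Hedged sketch of feature targets in the spirit of FG-MAE (not the
        # released implementation): spectral (NDI) + spatial (HOG) features.

        import numpy as np
        from skimage.feature import hog

        def ndi(band_a, band_b, eps=1e-6):
            """Normalized difference index, e.g. NDVI when band_a=NIR, band_b=red."""
            return (band_a - band_b) / (band_a + band_b + eps)

        def fg_mae_targets(ms_image: np.ndarray) -> np.ndarray:
            """ms_image: (C, H, W); band indices 3 (red) and 7 (NIR) are assumed."""
            ndvi = ndi(ms_image[7], ms_image[3])               # spectral target
            hog_feat = hog(ms_image.mean(axis=0),              # spatial target on a
                           orientations=9,                     # grayscale composite
                           pixels_per_cell=(8, 8),
                           cells_per_block=(1, 1))
            return np.concatenate([ndvi.ravel(), hog_feat])

        targets = fg_mae_targets(np.random.rand(13, 64, 64).astype(np.float32))
        print(targets.shape)   # the decoder regresses these for the masked patches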

    Urban Forests for Carbon Sequestration and Heat Island Mitigation

    Urban forests serve both as a carbon sequestration pool and a heat island mitigation tool. Climate change will increase the frequency and severity of urban heat islands; thus, new urban planning strategies demand our attention. Based on multimodal, remotely sensed data, we map the tree density, the carbon it sequesters, and its impact on urban heat islands for Long Island, NY and Dallas, TX. Using local climate zones, we investigate urban planning concepts such as optimized tree planting and adjusted building designs to mitigate urban heat islands.
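
    To illustrate the kind of analysis implied here, the sketch below relates land surface temperature to tree canopy fraction within local climate zones; the table layout, zone names, and numbers are invented for the example, not results from the study.

        # Hedged sketch: LST vs. canopy fraction per local climate zone (LCZ).
        # All values are placeholders.

        import numpy as np
        import pandas as pd

        pixels = pd.DataFrame({
            "lcz":             ["compact_midrise", "open_lowrise",
                                "compact_midrise", "open_lowrise"],
            "canopy_fraction": [0.05, 0.30, 0.10, 0.45],
            "lst_celsius":     [38.2, 33.5, 37.1, 31.8],
        })

        # Mean temperature and canopy cover per zone ...
        print(pixels.groupby("lcz").mean(numeric_only=True))

        # ... and a crude cooling slope (degrees C per unit canopy fraction).
        slope, _ = np.polyfit(pixels["canopy_fraction"], pixels["lst_celsius"], deg=1)
        print(f"approx. cooling effect: {slope:.1f} C per unit canopy fraction")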

    Open Source Science for Large-Scale Data Mining in Earth Observation

    The presentation shares with the NASA Earth System Observatory team DLR's expertise on the design of open-source-based big geospatial data processing platforms for remote sensing AI workloads, in the context of DLR's "terrabyte" initiative, which includes a planned collaboration with IBM Research.

    Quantification of Carbon Sequestration in Urban Forests

    Vegetation, trees in particular, sequester carbon by absorbing carbon dioxide from the atmosphere; however, the lack of efficient quantification methods of carbon stored in trees renders it difficult to track the process. Here we present an approach to estimate the carbon storage in trees based on fusing multispectral aerial imagery and LiDAR data to identify tree coverage, geometric shape, and tree species, which are crucial attributes for carbon storage quantification. We demonstrate that tree species information and their three-dimensional geometric shapes can be estimated from remote imagery in order to calculate the tree's biomass. Specifically, for Manhattan, New York City, we estimate a total of 52,000 tons of carbon sequestered in trees.
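
    One ingredient such an imagery-LiDAR fusion typically needs is tree geometry from LiDAR. The sketch below, a minimal illustration rather than the paper's workflow, derives a canopy height model from surface and terrain rasters and marks candidate tree tops as local maxima; the raster shapes, the 3 m height threshold, and the window size are assumptions.

        # Hedged sketch: canopy height model (CHM) and tree-top detection from
        # LiDAR-derived rasters. Thresholds and shapes are illustrative.

        import numpy as np
        from scipy.ndimage import maximum_filter

        def canopy_height_model(dsm: np.ndarray, dtm: np.ndarray) -> np.ndarray:
            """Per-pixel vegetation height above ground (m)."""
            return np.clip(dsm - dtm, 0.0, None)

        def tree_tops(chm: np.ndarray, min_height=3.0, window=5) -> np.ndarray:
            """Boolean mask of local CHM maxima taller than min_height."""
            return (maximum_filter(chm, size=window) == chm) & (chm >= min_height)

        dsm = np.random.rand(100, 100) * 30.0   # placeholder surface model
        dtm = np.random.rand(100, 100) * 2.0    # placeholder terrain model
        chm = canopy_height_model(dsm, dtm)
        print("candidate tree tops:", int(tree_tops(chm).sum()))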

    Self-supervised vision transformers for joint SAR-optical representation learning

    Self-supervised learning (SSL) has attracted much interest in remote sensing and Earth observation due to its ability to learn task-agnostic representations without human annotation. While most of the existing SSL works in remote sensing utilize ConvNet backbones and focus on a single modality, we explore the potential of vision transformers (ViTs) for joint SAR-optical representation learning. Based on DINO, a state-of-the-art SSL algorithm that distills knowledge from two augmented views of an input image, we combine SAR and optical imagery by concatenating all channels to a unified input. Subsequently, we randomly mask out channels of one modality as a data augmentation strategy. While training, the model gets fed optical-only, SAR-only, and SAR-optical image pairs, learning both intra- and inter-modality representations. Experimental results on the BigEarthNet-MM dataset demonstrate the benefits of both the ViT backbones and the proposed multimodal SSL algorithm DINO-MM.
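
    The sketch below is one way to read the channel-masking augmentation described above, assuming a Sentinel-1/Sentinel-2 channel stack; it is an interpretation for illustration, not the released DINO-MM code, and the function name and channel counts are made up.

        # Hedged sketch: concatenate SAR + optical channels and randomly zero one
        # modality so training views are optical-only, SAR-only, or SAR-optical.

        import random
        import torch

        def random_sensor_drop(x: torch.Tensor, n_sar: int = 2, p: float = 0.5) -> torch.Tensor:
            """x: (C, H, W) with the first n_sar channels from SAR, the rest optical."""
            if random.random() >= p:
                return x                    # keep the full SAR-optical pair
            out = x.clone()
            if random.random() < 0.5:
                out[:n_sar] = 0.0           # drop SAR     -> optical-only view
            else:
                out[n_sar:] = 0.0           # drop optical -> SAR-only view
            return out

        # Example: Sentinel-1 (VV, VH) + 12 Sentinel-2 bands stacked to 14 channels.
        view = random_sensor_drop(torch.randn(14, 120, 120))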